algorithmic decision-making
The Neutrality Fallacy: When Algorithmic Fairness Interventions are (Not) Positive Action
Weerts, Hilde, Xenidis, Raphaële, Tarissan, Fabien, Olsen, Henrik Palmer, Pechenizkiy, Mykola
Various metrics and interventions have been developed to identify and mitigate unfair outputs of machine learning systems. While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law. As the Court of Justice of the European Union has been strict when it comes to assessing the lawfulness of positive action, this would impose a significant legal burden on those wishing to implement fair-ml interventions. In this paper, we propose that algorithmic fairness interventions should often be interpreted as a means to prevent discrimination rather than as a measure of positive action. Specifically, we suggest that this category mistake can often be attributed to neutrality fallacies: faulty assumptions regarding the neutrality of fairness-aware algorithmic decision-making. Our findings raise the question of whether a negative obligation to refrain from discrimination is sufficient in the context of algorithmic decision-making. Consequently, we suggest moving away from a negative duty to 'do no harm' towards a positive obligation to actively 'do good' as a more adequate framework for algorithmic decision-making and fair-ml interventions.
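As an illustration of the kind of fairness metric the abstract refers to, one widely used measure is the demographic parity difference: the gap in positive-decision rates between two groups. The sketch below is a minimal, self-contained illustration; the function name and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    0 means both groups receive positive decisions at the same rate;
    larger values indicate greater disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: group 0 gets positive decisions 75% of the time,
# group 1 only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A fairness-aware intervention in the abstract's sense would then adjust the model or its decisions to shrink this gap, which is precisely the step whose legal characterization (prevention of discrimination vs. positive action) the paper examines.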
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.05)
- Europe > France (0.04)
- Europe > Netherlands > North Brabant > Eindhoven (0.04)
- (12 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Banking & Finance (0.93)
- Health & Medicine > Therapeutic Area > Oncology (0.67)
- (3 more...)
"This is not a data problem": Algorithms and Power in Public Higher Education in Canada
Algorithmic decision-making is increasingly being adopted across public higher education. The expansion of data-driven practices by post-secondary institutions has occurred in parallel with the adoption of New Public Management approaches by neoliberal administrations. In this study, we conduct an in-depth qualitative ethnographic case study of data and algorithms in use at a public college in Ontario, Canada. We identify the data, algorithms, and outcomes in use at the college, and assess how the college's processes and relationships support those outcomes, as well as how different stakeholders perceive the college's data-driven systems. In addition, we find that the growing reliance on algorithmic decisions leads to increased student surveillance, exacerbation of existing inequities, and the automation of the faculty-student relationship. Finally, we identify a cycle of increased institutional power perpetuated by algorithmic decision-making and driven by a push towards financial sustainability.
- North America > Canada > Ontario > Toronto (0.46)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- (13 more...)
Fair Off-Policy Learning from Observational Data
Frauen, Dennis, Melnychuk, Valentyn, Feuerriegel, Stefan
Algorithmic decision-making in practice must be fair for legal, ethical, and societal reasons. To achieve this, prior research has contributed various approaches that ensure fairness in machine learning predictions, while comparatively little effort has focused on fairness in decision-making, specifically off-policy learning. In this paper, we propose a novel framework for fair off-policy learning: we learn decision rules from observational data under different notions of fairness, where we explicitly assume that the observational data were collected under a different (potentially discriminatory) behavioral policy. We then propose a neural network-based framework to learn optimal policies under different fairness notions. We further provide theoretical guarantees in the form of generalization bounds for the finite-sample version of our framework. We demonstrate the effectiveness of our framework through extensive numerical experiments using both simulated and real-world data. Altogether, our work enables algorithmic decision-making in a wide array of practical applications where fairness must be ensured. Algorithmic decision-making in practice must avoid discrimination and thus be fair to meet legal, ethical, and societal demands (Nkonde, 2019; De-Arteaga et al., 2022; Corbett-Davies et al., 2023). For example, in the U.S., the Fair Housing Act and the Equal Credit Opportunity Act stipulate that decisions must not be subject to systematic discrimination by gender, race, or other attributes deemed sensitive. However, research from different areas has provided repeated evidence that algorithmic decision-making is often not fair. A prominent example is Amazon's tool for automatically screening job applicants, used between 2014 and 2017 (Dastin, 2018). It was later discovered that the underlying algorithm generated decisions that systematically discriminated against women, resulting in a ceteris paribus lower probability of women being hired.
Ensuring fairness in off-policy learning is subject to inherent challenges.
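The off-policy setting the excerpt describes can be sketched concretely: logged data were generated by a (possibly discriminatory) behavior policy, so a candidate policy's value is estimated by reweighting logged rewards with inverse propensities, and a fairness term penalizes group disparities in the candidate policy's actions. Everything below (function names, the specific action-rate penalty) is a simplified illustration under assumed notation, not the paper's actual framework.

```python
import numpy as np

def ipw_policy_value(rewards, propensities, policy_probs):
    """Inverse-propensity-weighted estimate of a candidate policy's value
    from logged observational data.

    propensities[i]: behavior policy's probability of the logged action.
    policy_probs[i]: candidate policy's probability of that same action.
    """
    return np.mean(policy_probs / propensities * rewards)

def fair_objective(rewards, propensities, policy_probs, group, lam=1.0):
    """Estimated policy value minus a penalty on the gap in the candidate
    policy's action rates between two groups (one simple 'action fairness'
    notion; illustrative only)."""
    value = ipw_policy_value(rewards, propensities, policy_probs)
    gap = abs(policy_probs[group == 0].mean() - policy_probs[group == 1].mean())
    return value - lam * gap

# Toy logged data from a 50/50 behavior policy; the candidate policy
# treats both groups identically, so the penalty is zero.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
propensities = np.array([0.5, 0.5, 0.5, 0.5])
policy_probs = np.array([1.0, 0.0, 1.0, 0.0])
group = np.array([0, 0, 1, 1])
print(fair_objective(rewards, propensities, policy_probs, group))  # 1.0
```

A learning method in this spirit would parameterize `policy_probs` (e.g., with a neural network) and maximize such an objective; the inherent challenge noted above is that both the value estimate and the fairness penalty must be inferred through the lens of the biased logging policy.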
- North America > United States > Oregon (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States > New York (0.04)
- (2 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (0.48)
Applying Interdisciplinary Frameworks to Understand Algorithmic Decision-Making
Schmude, Timothée, Koesten, Laura, Möller, Torsten, Tschiatschek, Sebastian
Well-known examples of such "high-risk" [6] systems can be found in recidivism prediction [5], refugee resettlement [3], and public employment [19]. Many authors have outlined that faulty or biased predictions by ADM systems can have far-reaching consequences, including discrimination [5], inaccurate predictions [4], and overreliance on automated decisions [2]. Therefore, high-level guidelines are meant to prevent these issues by pointing out ways to develop trustworthy and ethical AI [10, 22]. However, practically applying these guidelines remains challenging, since the meaning and priority of ethical values shift depending on who is asked [11]. Recent work in Explainable Artificial Intelligence (XAI) thus suggests equipping individuals who are involved with an ADM system and carry responsibility (so-called "stakeholders") with the means of assessing the system themselves, i.e., enabling users, deployers, and affected individuals to independently check the system's ethical values [14]. Arguably, a pronounced understanding of the system is necessary for making such an assessment. While numerous XAI studies have examined how explaining an ADM system can increase stakeholders' understanding [20, 21], we highlight two aspects that remain an open challenge: i) the amount of resources needed to produce and test domain-specific explanations and ii) the difficulty of creating and evaluating understanding for a large variety of people. Further, it is important to note that, despite our reference to "Explainable AI," ADM is not constrained to AI and indeed might encompass a broader problem space. Despite the emphasis on "understanding" in XAI research, the field features only a few studies that introduce learning frameworks from other disciplines.
- Government (0.92)
- Education (0.71)
AI regulation: A state-by-state roundup of AI bills
Wondering where AI regulation stands in your state? Today, the Electronic Privacy Information Center (EPIC) released The State of State AI Policy, a roundup of AI-related bills at the state and local level that were passed, introduced or failed in the 2021-2022 legislative session (EPIC gave VentureBeat permission to reprint the full roundup below). Within the past year, according to the document (which was compiled by summer clerk Caroline Kraczon), states and localities have passed or introduced bills "regulating artificial intelligence or establishing commissions or task forces to seek transparency about the use of AI in their state or locality."
- North America > United States > California > San Francisco County > San Francisco (0.16)
- North America > United States > Vermont (0.07)
- North America > United States > Illinois (0.07)
- (6 more...)
- Law > Statutes (1.00)
- Government (1.00)
When Is AI Actually Explainable? - AI Summary
Explainability covers a research field where a wide variety of experts come together: mathematicians, engineers, psychologists, philosophers and regulators, which makes it one of the most interesting topics around. People make noisy, inconsistent decisions based on assumptions that are sometimes hard to check, and so does AI. The European Commission has recently published a proposal for what is going to be the first attempt ever to insert AI systems and their use into a coherent legal framework. The proposal explicitly refers to the risk of AI systems not being explainable and possibly being biased as a result. To me (and the people at Deeploy), explainability of AI means that everyone involved in algorithmic decision-making should be able to understand well enough how decisions have been made.
When is AI actually explainable?
Explainability is a fascinating topic. It covers a research field where a wide variety of experts come together: mathematicians, engineers, psychologists, philosophers and regulators, which makes it one of the most interesting topics around. I have been involved in quite a few AI projects where explainability (or XAI) turned out to be crucial. So, I decided to gather and share my experiences, and those of my colleagues at Deeploy. AI is one of the biggest innovations of our time. It can change the way we live, work, care, teach and interact with each other.
How To Make Ethical Use Of AI In The Hiring Process
Global corporate investments in AI are projected to double to around $110 billion over the next two years. One of the main areas of application for organizational AI is recruitment (e.g., staffing, hiring, employee selection), not least because human recruiters waste a great deal of time entering data, sorting through resumes, and making imprecise inferences about candidates' talent and potential. Across countries and industries, big firms such as IKEA, Unilever, Intel and Vodafone rely on algorithmic decision-making in their recruitment processes. It is noteworthy that many of the most popular traditional hiring methods have very low accuracy, and attempts to evaluate other people's job suitability are typically contaminated by the usual stereotypes and prejudices that undermine human objectivity and reduce our ability to understand others. Unsurprisingly, many recruiters see technological innovations such as machine-learning algorithms as a big time saver. Emerging academic research also suggests that AI can improve organizations' ability to accurately predict employee job performance and select the right person for the right role, as well as increase fairness, transparency, and consistency.
Podcast: Can AI fix your credit?
Credit scores have been used for decades to assess consumer creditworthiness, but their scope is far greater now that they are powered by algorithms. Not only do they consider vastly more data, in both volume and type, but they increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. In this second episode of a series on automation and our wallets, we explore just how much the machines that determine our creditworthiness have come to affect far more than our financial lives. This episode was produced by Jennifer Strong, Karen Hao, Emma Cillekens and Anthony Green. Miriam: It was not uncommon to be locked out of our hotel room or to have a key not work and him have to go down to the front desk and handle it. And it was not uncommon to pay a bill at a restaurant and then have the check come back. Jennifer: We're going to call this woman Miriam to protect her privacy.
- Banking & Finance > Credit (1.00)
- Government > Regional Government > North America Government > United States Government (0.94)
- Information Technology > Communications > Mobile (0.50)
- Information Technology > Artificial Intelligence > Machine Learning (0.49)